Next Generation Modeling: A Grand Challenge
ABSTRACT
First, I define a few key dimensions of modeling, such as economy, culture, and technology, and follow up with how these dimensions will change in the future, and where our research may lead. I conclude the paper with an overview of research we are performing to help move modeling forward by addressing some aspects of these dimensions. Our research aim is to demonstrate key problems in modeling, with partial solutions, along with addressing what is possible for the future to tackle the “grand challenges” of modeling.

INTRODUCTION

The field of Simulation is usually combined with the term “Modeling” to form “Modeling and Simulation” or perhaps “Simulation Modeling.” Regardless of these terminological differences, modeling plays a key role not only for simulation [1], but for science, engineering, and art. To model is to create a structure that captures interesting or noteworthy attributes of a target object. The model of the RMS Queen Mary ship that I have on my shelf is a small metallic piece, and it allows me to experience the ship without setting foot on it. I can become familiar with something that is literally outside of my grasp. By making a target object smaller (as for the Queen Mary) or larger (as for a hexagonal benzene molecule), familiarity with the target is made possible. Modeling dynamical systems has many similarities with these scale models; we need to explore these connections now that technology has permitted us to construct scale models with greater efficiency and economy.

Modeling is an amazingly diverse topic whose coverage can be found in just about every discipline. It differs from Analysis or Execution in that the topical area does not command as much attention in the Simulation literature as Analysis (the mathematical and statistical study of dynamical system behavior) and Execution (the algorithmic analysis of computational efficiency). Modeling does pose major challenges, whose solutions will be felt far outside of the simulation sphere.

People speak of models in many ways. A computer program could be seen as a model. Someone may say “I have created a model,” but when put to the question of representation, it turns out that they have created a computer program. Then the question becomes one of representation, which is central to modeling. What representational vehicle did they use, and was the program the only representation, or did the model construction begin with other languages and forms and then undergo a sequence of transformations to yield the final program? The area of modeling is all about these forms and the construction process. Modeling is concerned with what materials are used to make a structure, how one chooses these materials, and how one interfaces with them. It is a mistake to construe modeling in Computer Simulation to be fundamentally different from modeling found in other areas, such as scale models of the Queen Mary, the Eiffel Tower, or a working miniature gas-powered turbine. Tasks such as verification and validation are also critical to the overall process of simulation, but they are not the core part of modeling per se; instead, they affect modeling and serve as one of several feedback mechanisms that help to adjust model structure. One might say that the purpose of modeling is system validation, but apart from the observation that achieving validity is only one aspect of the process of modeling, it is like saying that the purpose of driving is to get to the store.
Driving and modeling have their own particular knowledge, ontologies, and processes, but the means and ends are separate. Modeling, as an interface, lies midway between the human doing the modeling and the thing being modeled. Media become paramount in discussions of model definition, style, aesthetics, and how models are crafted.

HISTORY

The subject of history might better be termed a timeline, where we look at the past, the present, and project into the future. For considering grand challenges for modeling, it may behoove us to begin with the future and then return to the past to collect artifacts relevant to the modeling enterprise. What will history say of our modeling efforts today and in the future? Can we imagine a futuristic modeling environment? One way to imagine the future of modeling is to read science fiction and watch modern movies, with their ever-increasing dependence on the latest technologies in computer graphics and post-production techniques that blur the line between “reality” and “simulation.” Disney’s 1982 movie Tron envisaged a world inside the computer, with users having their own representative avatars working against the Master Control Program (MCP). The MCP, whose real-world counterpart ran a large company, played the role of antagonist, while the protagonist aimed to shut down the evil MCP. The 3D, graphically rendered computer landscape, very advanced for its time, showed a futuristic computing device, along with its working innards. Another popular TV series, Star Trek: The Next Generation, contained a special environment called the Holodeck. The Holodeck could reproduce the effects of any virtual scene, set of objects, or level of sensory immersion and engagement. In many ways, the Holodeck becomes the modeler’s nirvana, since anything can be created and experienced. To program the Holodeck is to craft the ultimate simulation, one where it is not possible to differentiate between the Holodeck experience and “real life.” It was never made clear what Holodeck programs actually looked like, or how one created models in the Holodeck, but it seems doubtful that Holodeck programmers used paper, flat screens, and sharp writing utensils. It would appear that part of our challenge is to figure out how to achieve similar tasks in modeling, with the assumption that our state of technology will improve to the point where the Holodeck will eventually become achievable. In fact, the Holodeck and similar futuristic features found in recent shows, books, and movies (such as Doctor Who, Gibson’s Neuromancer, and the film The Matrix) help us to construct Gedanken (i.e., mind) experiments. For example, imagine that you are inside a Holodeck and that it is not possible to ask for paper, a computer keyboard, mouse, or writing instruments. How will you model now that you have been deprived of your dependencies? Rather than being an easily dismissed fantasy, this sort of philosophical quandary should help to cast a new foundation for grand challenge modeling. We should and must envisage modeling as it would be on the Holodeck. Efforts that help us to answer these questions are likely to lead to partial solutions to the grand challenge questions that we pose for Modeling. For an example of what we might create in the Holodeck, Figure 1 and Figure 2 suggest ways in which we might design or observe TCP/IP networks. Elam and Hanberger [2] created a 3D movie with visual imagery accentuating the flow of packets through routers.
There are two ways of looking at this work: as a visualization of more abstract phenomena, or as a precursor to modeling research for local area networks and the Internet.

Figure 1: TCP/IP router, which directs IP packet flow. Courtesy of Gunilla Elam, Ericsson Media Lab, 1999.

Figure 2: Packets being routed over the Internet. Courtesy of Gunilla Elam, Ericsson Media Lab, 1999.

Some of the difference depends on the ontological status with which we endow certain objects. Perhaps these figures represent the “real” router models, whereas the flat models are typographic or diagrammatic visualizations? If we return to the past by winding back the clock 5000 years or so, our first human attempts at modeling would have been small figurines and objects, likely of religious or ritual importance. Archeological digs of the Middle East are strewn with such finds, representing spawning grounds for human civilization. The clay tokens in Babylonia were originally used for accounting purposes [3], where they resembled the items bartered or exchanged (sheep, oil, grain). The tokens were placed inside clay envelopes, the ancient equivalent of modern sealed paper envelopes and digitally watermarked data. As the envelopes had to be frequently opened, new wedge-shaped marks were made on the outside of the clay, representing our first written language. Once cuneiform had its scribes and artisans, the tokens fell into disuse, since producing these costly contents became inefficient. The clay figurines and tokens were some of our first models. While we have progressed into more stylistic and flatter forms of model representation, this progression is likely due to cost efficiencies introduced by the new technologies. Virtual clay tokens and figurines would have worked wonders in the Holodeck, but their use was less efficient than cuneiform, and more expensive in labor. We must carefully consider all tradeoffs from the past before imagining that representations evolved toward more ideal forms. Our forms advanced to accommodate existing technologies.

TECHNOLOGY

Given what we may strive for in terms of technological progress, we should emphasize model definition and construction in cutting-edge environments capable of supporting a truly engaging and immersive experience. Thus, modeling efforts centered on various new forms of computing are to be encouraged. We have come a long way from the movie Tron, to where we can simulate almost photo-realistic real-time objects and action on an inexpensive home computer. The problems come into play when considering the limited immersion gained by these displays. We realize that we are outside of the environment, and not really part of it, and that we cannot smell, feel, or touch objects. Among the relevant technologies, with new ones emerging on an annual basis, are virtual reality, augmented reality, ubiquitous computing, pervasive computing, and rapid prototyping. Virtual reality has been with us for some time as a buzzword, and it represents a move away from the 2D display toward a more engaging set of interface hardware, such as data gloves and eye goggles. Augmented reality is associated with a combined virtual-real interface without a clear line being drawn between what is artificially created and what represents living, breathing matter. Ubiquitous and pervasive computing represent a significant departure from the traditional keyboard interface, where computing elements and processes are everywhere, embedded in everything.
Rapid prototyping captures the ultimate effects of computer-aided manufacturing to a point where one achieves the rapid synthesis of physical objects from base material such as resin. Nanoengineering has similar promise at the molecular level. These technologies support next generation modeling, but they do not define it. We still require a new type of modeling framework if we are to take advantage of these technologies. Mass customization and personalized interfaces are gradually starting to replace mass production and “one interface fits all.” Technological advances are making it possible to customize media for small groups, and even individuals, to make it easier for people to achieve their goals and products. This movement in the marketplace has already found its way into web media in the form of skinz and flexible interface tools.

CULTURE

Though our technologies are becoming ever faster and richer with expression, we still model in science and engineering as if we had only papyrus and sharpened reeds. We have developed electronic papyri, even in the emerging form of “digital ink,” and yet visions of the Holodeck hint at greater aspirations involving objects instead of a flat substrate. Our acquired technological culture, however, is founded upon mathematical notation that has served us well over the past two and a half millennia. The notation has changed, but its essence remains not that different from the much older forms in Greece, Egypt, and Babylonia. To leverage culture is to celebrate diversity in representation, and to realize that no single representation is appropriate for every problem, or for every modeler. Given the economically viable state of affairs for personalization and customization, it should not come as too much of a surprise that this can have a dramatic effect on the practice of modeling. For decades, modelers have rallied around specific modeling cultures and techniques, each with their own journals, conferences, and software. I see this pluralism as a healthy phenomenon. Modeling is not a science, where the goal may be to achieve a unified representation or set of laws, but is instead an art form. Personalization and customization may at first seem to be at odds with standardization of model forms and interfaces. Yet there are several points regarding standardization that allow standards to coexist naturally with cultural products. The first observation is that standards are made by groups, and groups can choose forms of interest to them; that is, there can be competing standards, with successful model types and standards succeeding by a process of natural selection. The second observation is that, with the right level of technology, the standard need not impinge upon the human interface itself. For example, I can drive an automobile regardless of its shape, style, and aesthetics since I know I can locate the wheel and pedals. So items such as the wheels and pedals become embedded in a standard. The Star Trek communicator hints at the proper role played by standardization: at a level beneath the cultural interface. The communicator, in its futuristic role, allows anyone to talk to anyone else through the appropriate translations, all of which are handled by the communicator hardware and software. In other words, interface standards are an economic replacement for cultural interfaces and their associated transformations. Why should I care about an interface standard when the communicator worries about this for me?
The basic elements of the communicator are with us in the new developments of VoiceXML and AltaVista’s Babelfish natural language translator. Let the lower level machinery worry about the standard, so that I can concentrate on my cultural interface.

ECONOMY

In a democratic society where consumers and individuals are free to choose products and media, it is the market that ultimately dictates whether a modeling framework will succeed or fail. The adoption of cuneiform over tokens in Babylonia was fundamentally an economic choice, where tradeoffs with artistic craft may have existed. Another economic issue involving modeling is the concept of abstraction. Abstraction is claimed to have key importance in both mathematics and computer science, and for good reason. Abstraction provides us with economic methods for organizing our thinking. Abstraction can have several connotations. If I say that a particular conservation law of physics is abstract, I am not referring to how much ink, paper, or marble is used to represent the law. Instead, I am speaking of the economy of the relation underlying all isomorphically equivalent representations. Newton’s law f = ma would be as true and as economically presented if it were represented with billiard balls or house-sized buildings. The only reason that it is represented on processed wood pulp is the economy of materials and labor. Thus, abstraction is indeed a form of economy, but one should not misconstrue this to mean a minimal set of atoms. Taking the idea of minimalism to the Platonist extreme would mean avoiding the senses in realizing truth in representation. When mathematicians laud the use of abstraction and abstract thinking, presumably they refer to the very idea of what it means to be an equation or a set. They refer more to the symbolic quality of mathematics: that one symbol can represent anything, and that a multiplicity of symbols, when juxtaposed correctly, may take on entirely new meanings. Aesthetics plays a key role in adjusting for cultural needs and wants. The fact that meaning lies in the relation among objects, and not in the objects themselves, is not a good reason to shelve aesthetics. Instead, the situation is just the opposite. Since meaning lies in object relations, we need sensory immersion and cultural interfaces to get the most out of a model representation. We need to better surface model structure to enhance our levels of immersion, engagement, comprehension, motivation, and memory.

AESTHETIC COMPUTING

Aesthetic Computing [4, 5] is the study of artistic representations of formal models as found in computing and mathematics. The area is a convergence of trends and practices suggesting that future models will be less expensive to reproduce, more engaging, and customized for a specific task or person. We are developing a system called rube [6]. The purpose of rube is to facilitate new model types using technologies currently at our disposal. Model specification is decoupled from model presentation so that formal structures can be viewed or heard using many different styles and aesthetics. This customization is made possible mainly through the extensible markup language (XML) and work that uses XML as a basis. For example, we use the extensible 3D language (X3D) to represent 3D scenes, and we are creating two new XML dialects, MXL and DXL, to facilitate dynamic model construction, as required in the computer simulation community.
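To make the idea of decoupling specification from presentation concrete, the following is a minimal Python sketch; it is not code from rube, and the fsm, state, and transition element names are hypothetical stand-ins for an MXL-style description. One XML specification is rendered through two interchangeable presentation styles.

import xml.etree.ElementTree as ET

# Hypothetical MXL-like description of a model: the specification records
# only states and transitions, and says nothing about how they should look.
MODEL_SPEC = """
<fsm start="S1">
  <state id="S1"/> <state id="S2"/> <state id="S3"/>
  <transition from="S1" to="S2" input="1"/>
  <transition from="S2" to="S3" input="1"/>
  <transition from="S3" to="S1" input="1"/>
</fsm>
"""

def present_as_diagram(root):
    """A flat, diagrammatic presentation: states and arrows as plain text."""
    for t in root.findall("transition"):
        print(f"{t.get('from')} --{t.get('input')}--> {t.get('to')}")

def present_as_scene(root):
    """A stand-in for a 3D presentation (e.g., one X3D solid per state)."""
    for s in root.findall("state"):
        print(f"place a solid named {s.get('id')} in the scene")

root = ET.fromstring(MODEL_SPEC)
present_as_diagram(root)  # one aesthetic
present_as_scene(root)    # another aesthetic; the underlying model is unchanged

The point of the sketch is that the presentation functions can be swapped, restyled, or replaced by X3D scene generation without touching the model specification itself.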
MXL (Multimodeling eXchange Language) has elements and structure that closely match today’s existing model types used for simulation. The schema for MXL, for example, contains namespaces for finite state machines, Petri nets, and functional block models. DXL (Dynamics eXchange Language) will be used as an assembly language level for MXL, and so it represents primitive functional components not unlike those of a digital circuit. The use of multimodeling blends well with XML and aesthetics, since the use of multiple models accedes to the fact that multiple modeling styles can coexist and cooperate. Multimodels are defined as models whose components are further defined in terms of other models. It is not only a matter of enabling coupling and hierarchy in modeling, but of encouraging and accepting heterogeneous styles, with a wide landscape for aesthetics. To illustrate sample X3D scenes for a Finite State Machine (FSM), consider Figure 3 through Figure 6, which illustrate customized versions, or styles, of the same 3-state FSM. Figure 3 displays the formal structure of a Finite State Machine containing 3 states and 3 directed transitions, all triggered by a binary input of one. When zero is received by the FSM, the FSM stays in its current state, with S1 being the start state. Figure 4 through Figure 6 display alternative aesthetics to the diagrammatic one in Figure 3, but the underlying MXL files are equivalent to that of Figure 3. Figure 4 displays primitive solids, not unlike fairly recent 3D programming language studies in the form of Najork’s Cube language [7]. Figure 5 and Figure 6 represent fluid flow and agent-based aesthetics. Figure 7 and Figure 8 display a more complex program [8], a simple multi-tasking operating system, with programs being anthropomorphically presented as colored avatars, and OS resources being represented by people acting as servers behind desks. Different floor paths are defined to allow avatars to queue behind each resource.

Figure 3: FSM with 3 states and 3 transitions
Figure 4: FSM using primitive solids
Figure 5: FSM using a fluid flow metaphor
Figure 6: FSM using an agent metaphor
Figure 7: Operating System with tasks and resources
Figure 8: Close-up view of the CPU resource with task
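As a further illustration of the formal structure behind Figure 3 (again, a sketch rather than code taken from rube), the behavior of the 3-state machine can be written in a few lines of Python; the state names follow the figure, and the run function is hypothetical.

# Transition table for the FSM of Figure 3: an input of 1 advances the
# machine to the next state; an input of 0 leaves it where it is.
NEXT = {"S1": "S2", "S2": "S3", "S3": "S1"}

def run(inputs, state="S1"):
    """Feed a sequence of binary inputs through the FSM, starting at S1,
    and return the list of states visited."""
    visited = [state]
    for bit in inputs:
        state = NEXT[state] if bit == 1 else state
        visited.append(state)
    return visited

print(run([1, 0, 1, 1]))  # ['S1', 'S2', 'S2', 'S3', 'S1']

Whether this structure is presented as a diagram, a set of primitive solids, a fluid flow, or a group of agents, the transition table and start state remain the same; only the aesthetics change.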